Hyperdimensional computing (HDC) is an interesting machine learning paradigm for applications involving continuous, semi-supervised learning for long-term monitoring. However, its accuracy is not yet on par with other machine learning (ML) approaches. Frameworks enabling fast design-space exploration to find practical algorithms are necessary to make HD computing competitive with other ML techniques. To this end, we introduce HDTorch, an open-source, PyTorch-based HDC library with CUDA extensions for hypervector operations. We demonstrate HDTorch's utility by analyzing four HDC benchmark datasets using both classical and online HD training methodologies. We achieve average (training)/inference speedups of (111x/68x)/87x for classical/online HD, respectively. Moreover, we analyze the effects of different hyperparameters on runtime and accuracy. Finally, we demonstrate how HDTorch enables the exploration of HDC strategies applied to large, real-world datasets. We perform the first-ever HD training and inference analysis of the entire CHB-MIT EEG epilepsy database. Results show that the typical approach of training on a subset of the data does not necessarily generalize to the whole dataset, an important factor when developing future HD models for medical wearables.
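HDTorch's own API is documented in its repository; as a hedged illustration only, the core hypervector operations such a library accelerates (binding, bundling, and similarity search) can be sketched in plain PyTorch:

```python
# A minimal sketch of the core hypervector operations an HDC library
# accelerates; this is generic PyTorch, not HDTorch's actual API.
import torch

D = 10_000  # hypervector dimensionality

def random_hv(n: int = 1) -> torch.Tensor:
    """n random bipolar hypervectors of dimension D."""
    return torch.randint(0, 2, (n, D), dtype=torch.int8) * 2 - 1

def bind(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Binding: elementwise multiplication associates two hypervectors."""
    return a * b

def bundle(hvs: torch.Tensor) -> torch.Tensor:
    """Bundling: elementwise majority vote superimposes a set of hypervectors."""
    return torch.sign(hvs.sum(dim=0, dtype=torch.int32)).to(torch.int8)

def similarity(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Cosine similarity, used for nearest-class-prototype inference."""
    return torch.nn.functional.cosine_similarity(a.float(), b.float(), dim=-1)

# Classic HD training: a class prototype is the bundle of its training encodings.
samples = random_hv(100)   # stand-ins for encoded training samples
prototype = bundle(samples)
print(similarity(prototype, samples[0]))
```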
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
We leverage state-of-the-art machine learning methods and a decade's worth of archival data from CFHT to predict observatory image quality (IQ) from environmental conditions and observatory operating parameters. Specifically, we develop accurate and interpretable models of the complex dependence between data features and the IQ observed with CFHT's wide-field camera, MegaCam. Our contributions are several-fold. First, we collect, collate, and reprocess several disparate datasets gathered by CFHT scientists. Second, we predict probability distribution functions (PDFs) of IQ, achieving a mean absolute error of $\sim 0.07''$ for the predicted medians. Third, we explore data-driven actuation of the 12 dome "vents" installed in 2013-14 to accelerate the flushing of hot air from the dome. We combine epistemic and aleatoric uncertainties with probabilistic generative modeling to identify candidate vent adjustments that are in-distribution (ID); for the optimal configuration of each ID sample, we predict the reduction in observing time required to achieve a fixed SNR. On average, the reduction is $\sim 12\%$. Finally, we rank input features by their Shapley values to identify the most predictive variables for each observation. Our long-term goal is to construct reliable, real-time models that can forecast optimal observatory operating parameters to optimize IQ. We can then feed these forecasts into scheduling protocols and predictive-maintenance routines. We anticipate that such methods will become standard in automating observatory operations and maintenance by the time CFHT's successor, the Maunakea Spectroscopic Explorer, is installed in the coming decade.
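As a hedged illustration of the final contribution, ranking features by Shapley values can be sketched with the shap library; the model, data, and features below are placeholders, not CFHT's actual pipeline:

```python
# Generic sketch of ranking input features by mean absolute Shapley value;
# the model, data, and target here are placeholders, not CFHT's data.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # stand-ins for environmental/operational features
y = 0.8 * X[:, 0] + X[:, 2] ** 2 + rng.normal(scale=0.1, size=500)  # stand-in IQ

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # (n_samples, n_features)

# Rank features by mean absolute Shapley value, most predictive first.
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print(ranking)
```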
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with predicting the future motion of "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD map with 3D lane and crosswalk geometry, sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
Automatic differentiation (AD) is a technique for computing the derivative of a function represented by a program. It is considered the de facto standard for computing derivatives in many machine learning and optimisation software tools. Despite its practicality, the performance of the differentiated programs, especially for functional languages and in the presence of vectors, is suboptimal. We present an AD system for a higher-order functional array-processing language. The core functional language underlying this system simultaneously supports both source-to-source forward-mode AD and global optimisations such as loop transformations. In combination, gradient computation with forward-mode AD can be as efficient as reverse mode, and the Jacobian matrices required for numerical algorithms such as Gauss-Newton and Levenberg-Marquardt can be computed efficiently.
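As a minimal illustration (in Python rather than the paper's functional array language), forward-mode AD can be implemented with dual numbers: each value carries its tangent alongside it, and arithmetic propagates both.

```python
# Minimal dual-number sketch of forward-mode AD; the paper's system targets
# a functional array language, but the propagation rule is the same.
from dataclasses import dataclass
import math

@dataclass
class Dual:
    val: float  # primal value
    tan: float  # tangent (derivative with respect to the seeded input)

    def __add__(self, other):
        return Dual(self.val + other.val, self.tan + other.tan)

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.tan * other.val + self.val * other.tan)

def sin(x: Dual) -> Dual:
    # Chain rule through a primitive: sin'(x) = cos(x)
    return Dual(math.sin(x.val), math.cos(x.val) * x.tan)

# d/dx [x * sin(x)] at x = 2: seed the input's tangent with 1.
x = Dual(2.0, 1.0)
y = x * sin(x)
print(y.val, y.tan)  # f(2) and f'(2) = sin(2) + 2*cos(2)
```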
Exploring the climate impacts of various anthropogenic emissions scenarios is key to making informed decisions for climate change mitigation and adaptation. State-of-the-art Earth system models can provide detailed insight into these impacts, but have a large associated computational cost on a per-scenario basis. This large computational burden has driven recent interest in developing cheap machine learning models for the task of climate model emulation. In this manuscript, we explore the efficacy of randomly wired neural networks for this task. We describe how they can be constructed and compare them to their standard feedforward counterparts using the ClimateBench dataset. Specifically, we replace the serially connected dense layers in multilayer perceptrons, convolutional neural networks, and convolutional long short-term memory networks with randomly wired dense layers and assess the impact on model performance for models with 1 million and 10 million parameters. We find average performance improvements of 4.2% across model complexities and prediction tasks, with substantial performance improvements of up to 16.4% in some cases. Furthermore, we find no significant difference in prediction speed between networks with standard feedforward dense layers and those with randomly wired layers. These findings indicate that randomly wired neural networks may be suitable direct replacements for traditional dense layers in many standard models.
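As a hedged sketch of the idea, a randomly wired dense block replaces a serial chain of layers with a random DAG of dense nodes; the construction below is illustrative and may differ from the manuscript's:

```python
# Sketch of a randomly wired dense block: dense nodes connected by a random
# DAG instead of a serial chain. Generic PyTorch, for illustration only.
import random
import torch
import torch.nn as nn

class RandomlyWiredBlock(nn.Module):
    def __init__(self, width: int, n_nodes: int = 6, p_edge: float = 0.5, seed: int = 0):
        super().__init__()
        rng = random.Random(seed)
        # Edges only run from lower- to higher-indexed nodes, so the graph is
        # a DAG; every node keeps at least one predecessor.
        self.preds = {j: [i for i in range(j) if rng.random() < p_edge] or [j - 1]
                      for j in range(1, n_nodes)}
        self.nodes = nn.ModuleList(nn.Linear(width, width) for _ in range(n_nodes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs = [torch.relu(self.nodes[0](x))]
        for j in range(1, len(self.nodes)):
            # Aggregate predecessor outputs by summation, then apply the node.
            agg = torch.stack([outs[i] for i in self.preds[j]]).sum(dim=0)
            outs.append(torch.relu(self.nodes[j](agg)))
        return outs[-1]

block = RandomlyWiredBlock(width=64)
print(block(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```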
We present AI-SDC, an integrated suite of open source Python tools to facilitate Statistical Disclosure Control (SDC) of Machine Learning (ML) models trained on confidential data prior to public release. AI-SDC combines (i) a SafeModel package that extends commonly used ML models to provide ante-hoc SDC by assessing the vulnerability of disclosure posed by the training regime; and (ii) an Attacks package that provides post-hoc SDC by rigorously assessing the empirical disclosure risk of a model through a variety of simulated attacks after training. The AI-SDC code and documentation are available under an MIT license at https://github.com/AI-SDC/AI-SDC.
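As a generic illustration of post-hoc disclosure-risk assessment (not AI-SDC's actual API; see the repository for that), a minimal membership-inference check compares a model's confidence on training records versus held-out records:

```python
# Generic illustration of post-hoc disclosure-risk assessment via a simple
# membership-inference signal; not AI-SDC's actual API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Attack signal: confidence assigned to the true label. An overfit model is
# systematically more confident on members (training rows) than non-members.
conf_members = model.predict_proba(X_tr)[np.arange(len(y_tr)), y_tr]
conf_nonmembers = model.predict_proba(X_te)[np.arange(len(y_te)), y_te]

signal = np.concatenate([conf_members, conf_nonmembers])
is_member = np.concatenate([np.ones(len(y_tr)), np.zeros(len(y_te))])

# AUC near 0.5 means the attack cannot tell members from non-members;
# values well above 0.5 indicate empirical disclosure risk.
print("membership-inference AUC:", roc_auc_score(is_member, signal))
```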
Artificial intelligence methods including deep neural networks (DNN) can provide rapid molecular classification of tumors from routine histology with accuracy that matches or exceeds human pathologists. Discerning how neural networks make their predictions remains a significant challenge, but explainability tools help provide insights into what models have learned when corresponding histologic features are poorly defined. Here, we present a method for improving explainability of DNN models using synthetic histology generated by a conditional generative adversarial network (cGAN). We show that cGANs generate high-quality synthetic histology images that can be leveraged for explaining DNN models trained to classify molecularly-subtyped tumors, exposing histologic features associated with molecular state. Fine-tuning synthetic histology through class and layer blending illustrates nuanced morphologic differences between tumor subtypes. Finally, we demonstrate the use of synthetic histology for augmenting pathologist-in-training education, showing that these intuitive visualizations can reinforce and improve understanding of histologic manifestations of tumor biology.
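As a hedged sketch of class blending, a conditional generator with learned class embeddings can interpolate between two subtype embeddings and generate from the blend; the generator interface below is hypothetical, not the paper's cGAN:

```python
# Hypothetical sketch of class blending in a conditional generator:
# interpolate between two class embeddings and generate from the blend.
# The Generator here is a toy stand-in, not the paper's cGAN.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, n_classes: int = 2, z_dim: int = 128, embed_dim: int = 32):
        super().__init__()
        self.class_embed = nn.Embedding(n_classes, embed_dim)
        self.net = nn.Sequential(nn.Linear(z_dim + embed_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 64 * 64))  # toy 64x64 output

    def forward_blend(self, z, class_a: int, class_b: int, alpha: float):
        # alpha=0 is pure class_a, alpha=1 is pure class_b; intermediate
        # values expose morphologic features shared between subtypes.
        e = (1 - alpha) * self.class_embed.weight[class_a] \
            + alpha * self.class_embed.weight[class_b]
        e = e.expand(z.shape[0], -1)
        return self.net(torch.cat([z, e], dim=1)).view(-1, 64, 64)

g = Generator()
images = g.forward_blend(torch.randn(4, 128), class_a=0, class_b=1, alpha=0.5)
print(images.shape)  # torch.Size([4, 64, 64])
```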
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
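The released Flan-T5 checkpoints can be loaded through the Hugging Face transformers library; a minimal zero-shot usage sketch:

```python
# Minimal zero-shot usage of a publicly released Flan-T5 checkpoint via
# Hugging Face transformers (checkpoints range from flan-t5-small to -xxl).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

prompt = "Answer the following question. What is the boiling point of water in Celsius?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```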
When processing a batch of graphs in machine learning models such as graph neural networks (GNNs), it is common to combine several small graphs into one overall graph to accelerate processing and reduce the overhead of padding. This is supported, for example, in the PyG library. However, the sizes of the small graphs can vary considerably in both the number of nodes and the number of edges, and hence the size of the combined graph can still vary considerably, especially for small batch sizes. Consequently, the costs of excessive padding and wasted compute are still incurred. This paper proposes a new approach, tuple packing, for generating batches that incur minimal overhead. The algorithm extends the recently introduced sequence-packing method to work on the 2D tuples of (|nodes|, |edges|). A monotone heuristic is applied to the 2D histogram of tuple values to define a priority for packing histogram bins together, with the objective of reaching limits on both the number of nodes and the number of edges. Experiments verify the effectiveness of the algorithm on multiple datasets.
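As a simplified sketch of the packing objective (first-fit decreasing rather than the paper's 2D-histogram heuristic), graphs described by (|nodes|, |edges|) tuples can be greedily packed into batches under node and edge limits:

```python
# Simplified greedy sketch of tuple packing: place graphs, described by
# (|nodes|, |edges|) tuples, into batches without exceeding node and edge
# limits. The paper's monotone 2D-histogram heuristic is more refined.
from typing import List, Tuple

def pack_tuples(sizes: List[Tuple[int, int]],
                max_nodes: int, max_edges: int) -> List[List[int]]:
    # Sort largest-first so big graphs seed batches (first-fit decreasing).
    order = sorted(range(len(sizes)), key=lambda i: sizes[i], reverse=True)
    batches, loads = [], []  # loads[b] = (nodes used, edges used) in batch b
    for i in order:
        n, e = sizes[i]
        for b, (bn, be) in enumerate(loads):
            if bn + n <= max_nodes and be + e <= max_edges:
                batches[b].append(i)
                loads[b] = (bn + n, be + e)
                break
        else:  # no existing batch fits: open a new one
            batches.append([i])
            loads.append((n, e))
    return batches

graphs = [(10, 22), (3, 5), (7, 14), (12, 30), (4, 6), (6, 9)]
print(pack_tuples(graphs, max_nodes=20, max_edges=40))
```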